binary accuracy


Guardian: Detecting Robotic Planning and Execution Errors with Vision-Language Models

Pacaud, Paul, Garcia, Ricardo, Chen, Shizhe, Schmid, Cordelia

arXiv.org Artificial Intelligence

Robust robotic manipulation requires reliable failure detection and recovery. Although current Vision-Language Models (VLMs) show promise, their accuracy and generalization are limited by the scarcity of failure data. To address this data gap, we propose an automatic robot failure synthesis approach that procedurally perturbs successful trajectories to generate diverse planning and execution failures. This method produces not only binary classification labels but also fine-grained failure categories and step-by-step reasoning traces in both simulation and the real world. With it, we construct three new failure detection benchmarks: RLBench-Fail, BridgeDataV2-Fail, and UR5-Fail, substantially expanding the diversity and scale of existing failure datasets. We then train Guardian, a VLM with multi-view images for detailed failure reasoning and detection. Guardian achieves state-of-the-art performance on both existing and newly introduced benchmarks. It also effectively improves task success rates when integrated into a state-of-the-art manipulation system in simulation and on real robots, demonstrating the impact of our generated failure data. Code, data, and models are available at https://www.di.ens.fr/willow/research/guardian/.


Beyond Words: How Large Language Models Perform in Quantitative Management Problem-Solving

Kuzmanko, Jonathan

arXiv.org Artificial Intelligence

This study examines how Large Language Models (LLMs) perform when tackling quantitative management decision problems in a zero-shot setting. Drawing on 900 responses generated by five leading models across 20 diverse managerial scenarios, our analysis explores whether these base models can deliver accurate numerical decisions under varying presentation formats, scenario complexities, and repeated attempts. Contrary to prior findings, we observed no significant effects of text presentation format (direct, narrative, or tabular) or text length on accuracy. However, scenario complexity -- particularly in terms of constraints and irrelevant parameters -- strongly influenced performance, often degrading accuracy. Surprisingly, the models handled tasks requiring multiple solution steps more effectively than expected. Notably, only 28.8% of responses were exactly correct, highlighting limitations in precision. We further found no significant "learning effect" across iterations: performance remained stable across repeated queries. Nonetheless, significant variations emerged among the five tested LLMs, with some showing superior binary accuracy. Overall, these findings underscore both the promise and the pitfalls of harnessing LLMs for complex quantitative decision-making, informing managers and researchers about optimal deployment strategies.


Exploring the Limitations of Graph Reasoning in Large Language Models

Agrawal, Palaash, Vasania, Shavak, Tan, Cheston

arXiv.org Artificial Intelligence

Pretrained Large Language Models have demonstrated various types of reasoning capabilities through language-based prompts alone. In this paper, we probe the depth of graph reasoning in five different LLMs (GPT-4, GPT-3.5, Claude-2, Llama-2, and PaLM-2). In particular, we design 10 distinct graph traversal problems, each representing an increasing level of complexity. Further, we analyze the performance of models across settings such as varying graph sizes and different forms of k-shot prompting. Through this benchmarking process, we highlight various limitations, biases, and properties of LLMs, such as an inverse relation between performance and the average degrees of freedom of traversal per node in a graph, the overall negative impact of k-shot prompting on graph reasoning tasks, and a positive response bias that prevents LLMs from identifying the absence of a valid solution. Finally, we propose a new prompting technique designed specifically for graph traversal tasks, known as PathCompare, which shows a notable increase in LLM performance in comparison to standard prompting and CoT.


Analyzing Multispectral Satellite Imagery of South American Wildfires Using Deep Learning

Sun, Christopher

arXiv.org Artificial Intelligence

Since frequent severe droughts are lengthening the dry season in the Amazon Rainforest, it is important to detect wildfires promptly and forecast possible spread for effective suppression response. Current wildfire detection models are not versatile enough for the low-technology conditions of South American hot spots. This deep learning study first trains a Fully Convolutional Neural Network on Landsat 8 images of Ecuador and the Galapagos, using Green and Short-wave Infrared bands to predict pixel-level binary fire masks. This model achieves a 0.962 validation F2 score and a 0.932 F2 score on test data from Guyana and Suriname. Afterward, image segmentation is conducted on the Cirrus band using K-Means Clustering to simplify continuous pixel values into three discrete classes representing differing degrees of cirrus cloud contamination. Three additional Convolutional Neural Networks are trained to conduct a sensitivity analysis measuring the effect of simplified features on model accuracy and train time. The Experimental model trained on the segmented cirrus images provides a statistically significant decrease in train time compared to the Control model trained on raw cirrus images, without compromising binary accuracy. This proof of concept reveals that feature engineering can improve the performance of wildfire detection models by lowering computational expense.
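As a rough illustration of the K-Means step described above (not the study's code), the sketch below clusters scalar cirrus-band pixel values into three discrete contamination classes using Lloyd's algorithm; the pixel values are hypothetical, and quantile initialization is an assumption made here for determinism:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Cluster scalar pixel values into k discrete classes (Lloyd's algorithm)."""
    # Deterministic initialization: spread centers across the value quantiles.
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    # Reorder classes by center so label 0 = least cirrus contamination.
    order = np.argsort(centers)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]

# Hypothetical cirrus-band pixels, flattened to 1-D for clustering.
pixels = np.array([0.02, 0.03, 0.05, 0.40, 0.42, 0.85, 0.90])
labels, centers = kmeans_1d(pixels, k=3)
```

Replacing continuous band values with three class indices is the feature simplification the abstract credits with reducing train time without hurting binary accuracy.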


Real-Time Topology Optimization in 3D via Deep Transfer Learning

Behzadi, MohammadMahdi, Ilies, Horea T.

arXiv.org Artificial Intelligence

The published literature on topology optimization has exploded over the last two decades to include methods that use shape and topological derivatives or evolutionary algorithms formulated on various geometric representations and parametrizations. One of the key challenges of all these methods is the massive computational cost associated with 3D topology optimization problems. We introduce a transfer learning method based on a convolutional neural network that (1) can handle high-resolution 3D design domains of various shapes and topologies; (2) supports real-time design space explorations as the domain and boundary conditions change; (3) requires a much smaller set of high-resolution examples for the improvement of learning in a new task compared to traditional deep learning networks; (4) is multiple orders of magnitude more efficient than the established gradient-based methods, such as SIMP. We provide numerous 2D and 3D examples to showcase the effectiveness and accuracy of our proposed approach, including for design domains that are unseen by our source network, as well as the generalization capabilities of the transfer learning-based approach. Our experiments achieved an average binary accuracy of around 95% at real-time prediction rates. These properties, in turn, suggest that the proposed transfer-learning method may serve as the first practical underlying framework for real-time 3D design exploration based on topology optimization.
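In this setting, binary accuracy is typically the fraction of voxels whose thresholded material/void state in the predicted density field matches the ground-truth design. A minimal sketch of that metric (an assumption about how the paper computes it, with made-up densities):

```python
import numpy as np

def binary_accuracy(pred_density, true_density, threshold=0.5):
    """Fraction of voxels whose thresholded material/void state matches."""
    pred_solid = pred_density >= threshold
    true_solid = true_density >= threshold
    return float(np.mean(pred_solid == true_solid))

# Hypothetical per-voxel densities in [0, 1] from a predicted and a true design.
pred = np.array([0.9, 0.2, 0.7, 0.4])
true = np.array([1.0, 0.0, 1.0, 1.0])
binary_accuracy(pred, true)  # 3 of 4 voxels agree -> 0.75
```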


Classifying Toxic Comments with Natural Language Processing

#artificialintelligence

Regardless of whether you have a Medium account, a YouTube channel, or play League of Legends, you have probably seen toxic comments somewhere on the internet. Toxic behavior, which includes rude, hateful, and threatening actions, derails a productive comment thread and turns it into a battle. Needless to say, developing an artificial intelligence to identify and classify toxic comments would greatly help many online groups and communities. The data for this project can be found on Kaggle. This data set contains hundreds of thousands of comments, each labelled with some of the following traits: toxic, severe toxic, obscene, threat, insult, and identity hate. Here are two examples, a toxic comment and a non-toxic comment, with their labels.
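Since each comment can carry several of the six traits at once, this is a multi-label problem, and the labels are usually encoded as a multi-hot vector. A small sketch of that encoding (the helper name and trait spellings are this example's assumptions, not the Kaggle column names):

```python
# One slot per trait; a comment may set any subset of them.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def encode(traits):
    """Map a set of trait names to a 0/1 vector aligned with LABELS."""
    return [1 if name in traits else 0 for name in LABELS]

encode({"toxic", "insult"})  # comment flagged as toxic and insulting
encode(set())                # a non-toxic comment: all zeros
```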


Improving Consensus Accuracy via Z-Score and Weighted Voting

Jung, Hyun Joon (University of Texas at Austin) | Lease, Matthew (University of Texas at Austin)

AAAI Conferences

Using supervised and unsupervised features individually or together, we (a) detect and filter out noisy workers via Z-score, and (b) weight worker votes for consensus labeling. We evaluate on noisy labels from Amazon Mechanical Turk in which workers judge Web search relevance of query/document pairs. In comparison to a majority vote baseline, results show a 6% error reduction (accuracy from 48.83% to 51.91%) for graded accuracy and a 5% error reduction (accuracy from 64.88% to 68.33%) for binary accuracy.
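The two steps above can be sketched as follows; this is an illustrative reading of the approach, not the authors' implementation, and the votes, accuracy estimates, and Z-score threshold are made-up values:

```python
import numpy as np

def zscore_filter_and_vote(votes, worker_acc, z_thresh=-1.0):
    """votes: worker -> {item: 0/1 label}; worker_acc: worker -> estimated accuracy.

    (a) Filter out workers whose accuracy Z-score falls below z_thresh,
    (b) aggregate the remaining votes weighted by worker accuracy.
    """
    workers = list(worker_acc)
    accs = np.array([worker_acc[w] for w in workers])
    z = (accs - accs.mean()) / accs.std()
    kept = [w for w, zi in zip(workers, z) if zi >= z_thresh]

    items = {item for w in kept for item in votes[w]}
    consensus = {}
    for item in items:
        # Positive votes add the worker's weight, negative votes subtract it.
        score = sum(worker_acc[w] * (1 if votes[w][item] == 1 else -1)
                    for w in kept if item in votes[w])
        consensus[item] = 1 if score > 0 else 0
    return consensus

votes = {
    "w1": {"q1": 1, "q2": 0},
    "w2": {"q1": 1, "q2": 1},
    "w3": {"q1": 0, "q2": 0},  # noisy worker: low estimated accuracy
}
worker_acc = {"w1": 0.9, "w2": 0.8, "w3": 0.3}
consensus = zscore_filter_and_vote(votes, worker_acc)
```

Here w3's accuracy Z-score falls below the threshold, so only w1 and w2 vote, and w1's higher weight breaks the tie on q2.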